The Machine Doesn’t Dream. It Remembers: How AI Turned Culture into Capital and Attention into Control

Digital illustration created with ChatGPT-5.
The machine doesn’t dream. It remembers. And what it remembers, it calls creativity.
Every image, melody, or sentence an AI system produces rests on invisible labour. Millions of artists, writers, and musicians, whose work was scraped, sorted, and repurposed without consent, have unknowingly become its teachers. Their creativity has been converted into data, their styles blurred into algorithms. What began as a shared human culture has quietly become private capital.
What started as the extraction of creativity has now evolved into something broader: the extraction of attention. It is almost poetic that the foundational paper behind today’s AI revolution is titled “Attention Is All You Need.” Intended as a technical description, it now reads like an inadvertent prophecy, not only about machines but about the culture that built them. The same logic that turned art into data now turns behaviour into prediction. Generative AI and social media are two sides of the same coin. Both feed on what we create and feel, translating culture and emotion into profit. Our collective imagination has become both the fuel and the product of these systems.
AI itself is not malevolent. I use it daily to write, code, and test ideas. It feels like an overconfident child: imaginative, quick, but often wrong. It needs supervision and moral guidance. Used consciously, it can amplify human potential rather than replace it, but only if we remember who is in charge.
The real danger lies not in the technology but in the systems that deploy it, and in the motives behind them. For most people, AI is not a co-writer or assistant; it is embedded in the platforms they use every day. When artificial intelligence meets social media, it becomes something far more potent: a machine for persuasion.
When emotion, misinformation, and automation combine, the results are powerful and perilous. We have already seen how algorithms polarise societies, rewarding outrage over reasoned discussion; AI will only intensify this dynamic. Consider Operation Overload, a pro-Russian disinformation campaign that used consumer-grade AI tools to generate manipulated images, videos, and voice-cloned recordings in multiple languages, all designed to erode trust and sow division around immigration, elections, and geopolitics. It shows how social media’s incentive structure (outrage equals engagement) and generative AI’s capabilities (fake yet convincing personas and media) merge to create emotionally charged, tailor-made versions of reality so convincing that even sceptical, discerning users struggle to escape them.
Generative AI does not create new problems so much as it amplifies existing ones, particularly the flaws of social media we have ignored for too long. For years, we have pushed responsibility onto individuals: “think critically,” “check your sources,” “curate your feed.” But individual vigilance cannot compete with billion-dollar systems designed to capture attention. The true villain is not the technology itself but the recommendation engines that optimise for consumption over comprehension, engagement over understanding. They turn culture into a feedback loop of outrage and reward, shaping not just what we see but what we believe.
These systems succeed not because they deceive us but because they understand us too well. They feed on the instincts that built civilisation and now threaten to unravel it: curiosity, our hunger to know; ego, our desire to be seen; and outrage, the easiest path to righteousness. Each click is a small act of emotion: curiosity sharpened into appetite, attention converted into data. The algorithms do not invent these drives; they mirror and magnify them, turning human feeling into fuel. The danger is not merely that machines manipulate us, but that they do so by perfecting our own psychological flaws.
That is why regulation matters: not to silence creativity, but to protect truth and trust. We already regulate industries that affect our health, finances, and environment. Why should systems that shape our perception of reality remain ungoverned?
The next frontier of regulation must address the recommendation algorithms that determine what billions of people see and believe. These systems are optimised for engagement, and engagement thrives on outrage, fear, and controversy. As long as attention equals profit, truth will remain a casualty.
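To make that incentive concrete, here is a deliberately toy sketch of a feed ranker. Everything in it is hypothetical: the fields, weights, and scoring functions stand in for the vastly more complex models platforms actually run. The point is structural: when predicted engagement is the sole objective, emotionally provocative content rises by construction, and reweighting the objective changes what wins.

```python
# A toy feed ranker. All fields and weights are hypothetical; this
# illustrates an incentive structure, not any real platform's code.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    relevance: float    # match to the user's interests, 0..1
    reliability: float  # trustworthiness of the source, 0..1
    outrage: float      # emotional provocation, 0..1

def engagement_score(post: Post) -> float:
    # If outrage is the strongest predictor of clicks, it dominates the feed.
    return 0.2 * post.relevance + 0.8 * post.outrage

def user_chosen_score(post: Post) -> float:
    # An alternative objective weighted towards relevance and reliability.
    return 0.4 * post.relevance + 0.4 * post.reliability + 0.2 * post.outrage

feed = [
    Post("Measured policy analysis", relevance=0.9, reliability=0.9, outrage=0.1),
    Post("Rage-bait hot take", relevance=0.3, reliability=0.2, outrage=0.95),
]

print([p.title for p in sorted(feed, key=engagement_score, reverse=True)])
# ['Rage-bait hot take', 'Measured policy analysis']
print([p.title for p in sorted(feed, key=user_chosen_score, reverse=True)])
# ['Measured policy analysis', 'Rage-bait hot take']
```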
Truth does not vanish all at once. It dissolves slowly, under the weight of its imitations. First comes overload - so much information that discernment becomes exhausting. Then distortion - algorithms amplifying what provokes emotion rather than what deserves belief. And finally apathy - a learned helplessness in the face of endless noise. In that fatigue, anything can feel true, and everything can be made to seem false. The erosion of truth is not just a technical failure; it is a psychological one, engineered at scale.
Users deserve both knowledge and agency: to understand why content appears on their screens and to choose the principles guiding those recommendations, such as relevance, diversity, or reliability. We should know who created what we consume, not just what it is. As AI-generated material floods the web, authorship and provenance will matter more than style or apparent quality. The signature, not the sentence, will become the new marker of trust.
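What might “the signature, not the sentence” look like in code? Here is a minimal sketch, assuming Python’s cryptography package: the author signs the exact bytes of a work with a private key, and anyone holding the matching public key can verify the attribution. The example is illustrative only; real provenance efforts, such as the C2PA content-credentials standard, pursue the same idea at industrial scale.

```python
# A minimal provenance sketch: sign a work, then verify its authorship.
# Illustrative only; real provenance systems also bind metadata,
# timestamps, and certificate chains to the content.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The author generates a long-term keypair once and publishes the public key.
author_key = Ed25519PrivateKey.generate()
public_key = author_key.public_key()

article = "The machine doesn't dream. It remembers.".encode("utf-8")
signature = author_key.sign(article)  # binds these exact bytes to this author

# A reader or platform verifies provenance before trusting the attribution.
try:
    public_key.verify(signature, article)
    print("Authorship verified: the content matches the author's signature.")
except InvalidSignature:
    print("Verification failed: altered content or false attribution.")
```

Any edit to the signed bytes, even a single character, makes verification fail, which is what makes the signature a sturdier marker of trust than style or apparent quality.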
There are no simple answers. Technology always outpaces ethics. Yet coexistence is possible: musicians using AI to generate harmonies without surrendering authorship; scientists deploying generative models to design sustainable materials; writers using AI drafts as scaffolding for deeper revision.
The path forward begins with transparency - knowing where data comes from, who profits, and what we trade for convenience. And it continues with responsibility: using these tools as extensions of thought, not replacements for it.
If we pair progress with awareness, AI could become not the end of creativity but its reinvention, a mirror reflecting what we value and fear. But for that to happen, we must stop treating technology as inevitable and start treating it as negotiable.
The zeros and ones that drive our machines are not neutral. They carry the weight of our choices. Whether they empower or manipulate us depends entirely on how we decide to use them.